Linear Combinations and Related Concepts

Previously, we introduced the two fundamental operations of vectors: addition and scalar multiplication.

In this page, we will discuss the concept of a linear combination, which is a way to combine vectors using scalar multiplication and addition. Additionally, we will touch on the concepts of span, linear independence, and basis, which are all related to linear combinations.

At first, these terms might seem abstract, disconnected, or even intimidating, but we will show that we've actually been using these concepts under the hood all along.

The Basis Vectors

Previously, we discussed how vectors can be represented as arrows in a coordinate system. For example, in 2D space, we can represent a vector as an arrow from the origin to the point $(x, y)$, where $x$ is the $x$-component of the vector and $y$ is the $y$-component. Then:

If a vector is equal to, say, $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$, we can represent it as an arrow from the origin to the point $(3, 4)$. This means "go 3 units to the right and 4 units up".

However, there's another way to interpret this vector. Imagine that we have a vector $\hat{i}$ that points 1 unit in the $x$ direction and a vector $\hat{j}$ that points 1 unit in the $y$ direction. Then, our vector can be represented as adding 3 copies of $\hat{i}$ and 4 copies of $\hat{j}$:

$$\vec{v} = 3\hat{i} + 4\hat{j}$$

The vectors $\hat{i}$ and $\hat{j}$ are called basis vectors.
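
Here's a minimal numerical sketch of this decomposition, using NumPy (the names `i_hat` and `j_hat` are just illustrative):

```python
import numpy as np

# The standard 2D basis vectors
i_hat = np.array([1, 0])  # 1 unit in the x direction
j_hat = np.array([0, 1])  # 1 unit in the y direction

# The vector [3, 4] is 3 copies of i-hat plus 4 copies of j-hat
v = 3 * i_hat + 4 * j_hat
print(v)  # [3 4]
```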

Changing the Basis

It's important to note that our choice of basis vectors is arbitrary. We can choose (almost) any two vectors to be our basis vectors; as we will see, they just need to be linearly independent. For example, let $\vec{b}_1$ and $\vec{b}_2$ be the basis vectors of some space. We can still represent any vector by scaling and adding these basis vectors:

$$\vec{v} = c_1 \vec{b}_1 + c_2 \vec{b}_2$$

In our case, we had a vector $\vec{v} = 3\hat{i} + 4\hat{j}$. In the new basis, this same vector is represented by a different pair of scalars $c_1$ and $c_2$. In other words:

$$3\hat{i} + 4\hat{j} = c_1 \vec{b}_1 + c_2 \vec{b}_2$$

The important takeaway is that when we represent vectors as these lists of numbers, we are implicitly choosing some basis vectors.
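
To make this concrete, here's a small NumPy sketch that finds a vector's components in a new basis; the basis vectors $\vec{b}_1$ and $\vec{b}_2$ below are an arbitrary (hypothetical) choice:

```python
import numpy as np

# A hypothetical choice of new basis vectors; any two linearly
# independent vectors would work
b1 = np.array([1, 1])
b2 = np.array([-1, 1])

v = np.array([3, 4])  # the vector from before, in the standard basis

# Put the basis vectors in the columns of a matrix B. Solving
# B @ c = v gives the coordinates c of v relative to b1 and b2.
B = np.column_stack([b1, b2])
c = np.linalg.solve(B, v)
print(c)  # [3.5 0.5], i.e. v = 3.5*b1 + 0.5*b2
```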

Generalized Notation

In higher dimensions, we will need more basis vectors. Of course, we will eventually run out of letters in the alphabet, so we use a more generalized notation.

Sometimes, we use $\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$ to represent the basis vectors for an $n$-dimensional space. For example, in 3D space, we can represent a vector as:

$$\vec{v} = v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3$$

In $n$-dimensional space, we can then represent a vector as a sum:

$$\vec{v} = \sum_{i=1}^{n} v_i \vec{e}_i$$

These will not be used as often as the $\hat{i}, \hat{j}, \hat{k}$ notation, but they are useful for more abstract discussions regarding vector spaces and related concepts.

Linear Combinations

Let's summarize what we've discussed so far.

A basis is a set of vectors that can be used to represent any vector in a space. We can represent any vector by scaling and adding these basis vectors. We can say that a vector $\vec{v}$ can be represented as $\vec{v} = v_1 \vec{e}_1 + v_2 \vec{e}_2 + \cdots + v_n \vec{e}_n$.

The process of scaling and adding vectors is called a linear combination. A linear combination of the vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ would be a vector of the form:

$$a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n$$

By changing the scalars $a_1, a_2, \ldots, a_n$, we can create different vectors.
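
As a sketch, a linear combination is straightforward to compute numerically; `linear_combination` below is just an illustrative helper:

```python
import numpy as np

def linear_combination(scalars, vectors):
    """Return a_1*v_1 + a_2*v_2 + ... + a_n*v_n."""
    return sum(a * v for a, v in zip(scalars, vectors))

v1 = np.array([1, 2])
v2 = np.array([3, -1])

# Different choices of scalars produce different vectors
print(linear_combination([2, 1], [v1, v2]))   # [5 3]
print(linear_combination([0, -1], [v1, v2]))  # [-3  1]
```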

Span of Vectors

Consider two vectors $\vec{v}_1$ and $\vec{v}_2$ in the $xy$-plane. How many vectors can we create by scaling and adding these two vectors?

  • For most pairs of vectors, linear combinations of them will fill up the entire plane.

  • If the two vectors are parallel, then the linear combinations will only fill up a line.

  • If both vectors are zero, then the only linear combination is the zero vector, so the span is just the origin.

Generally, the set of all possible linear combinations of a set of vectors is called the span of those vectors. Denoted as $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n)$, it is the set of all vectors that can be represented as linear combinations of $\vec{v}_1, \ldots, \vec{v}_n$.

For example, the span of the vectors $\hat{i}$ and $\hat{j}$ is the entire 2-dimensional $xy$-plane, so we can write:

$$\operatorname{span}(\hat{i}, \hat{j}) = \mathbb{R}^2$$

Example Problem: Computing the Span

Determine the span of the vectors $\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\vec{v}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

To determine the span of these vectors, we need to find all possible linear combinations of them. This means that we want to find all vectors of the form $a \vec{v}_1 + b \vec{v}_2$. Set this to be equal to any vector $\begin{bmatrix} x \\ y \end{bmatrix}$:

$$a \begin{bmatrix} 1 \\ 2 \end{bmatrix} + b \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$

This creates a system of equations:

$$\begin{aligned} a + 2b &= x \\ 2a + b &= y \end{aligned}$$

We can use any method to solve this system of equations. For instance, we can multiply the first equation by 2 and subtract the second equation:

$$2(a + 2b) - (2a + b) = 2x - y \implies 3b = 2x - y \implies b = \frac{2x - y}{3}$$

Then, we can substitute this back into the first equation to get $a$:

$$a = x - 2b = x - \frac{2(2x - y)}{3} = \frac{2y - x}{3}$$

Hence we have:

$$a = \frac{2y - x}{3}, \quad b = \frac{2x - y}{3}$$

What this means is that for any vector $\begin{bmatrix} x \\ y \end{bmatrix}$, we have valid values for $a$ and $b$. Hence, the span of the vectors $\vec{v}_1$ and $\vec{v}_2$ is the entire 2D plane $\mathbb{R}^2$.
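
As a quick sanity check, we can plug an arbitrary target vector into the formulas we just derived:

```python
import numpy as np

v1 = np.array([1, 2])
v2 = np.array([2, 1])

# An arbitrary target vector [x, y]
x, y = 7.0, -3.0

# The formulas derived above
a = (2 * y - x) / 3
b = (2 * x - y) / 3

print(a * v1 + b * v2)  # [ 7. -3.] -- reproduces the target vector
```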

3D Spaces

In 3D space, we can represent vectors as a linear combination of three basis vectors: $\hat{i}$, $\hat{j}$, and $\hat{k}$. For example, the vector $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$ can be represented as $x\hat{i} + y\hat{j} + z\hat{k}$.

The span of two vectors in 3D space (if they are not parallel) will fill up a plane.

By varying the two coefficients, you can see how the linear combination of two vectors fills up a plane in 3D space. If you keep one of the coefficients constant (i.e. only one vector's coefficient varies), the combination traces out a line.

If we have three vectors in 3D space, the span of these vectors could fill up all of 3D space. Imagine the first two vectors spanning a plane, and the third vector pointing outside of that plane, allowing the combinations to fill up all of 3D space. However, if the third vector already lies in the plane spanned by the first two, it doesn't add any new directions, and the span of the three vectors will only fill up that plane.
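
One way to check this numerically is the rank of the matrix whose columns are the three vectors; here's a sketch with illustrative values:

```python
import numpy as np

# Two vectors spanning a plane, plus a third that lies in that plane
# (illustrative values)
v1 = np.array([1, 0, 1])
v2 = np.array([0, 1, 1])
v3 = v1 + 2 * v2  # deliberately inside the plane: [1, 2, 3]

# The rank of the matrix with these vectors as columns is the
# dimension of their span
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 -- a plane, not all of 3D space
```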

Linear Independence

Recall what we discussed about the span of vectors in 3D space: if the third vector lies in the plane formed by the first two vectors, it doesn't add any new directions. This means that the third vector is inside the span of the first two vectors; it can be represented as a linear combination of the first two vectors.

Another example is if we have two vectors that are parallel (collinear). In this case, the span of the two vectors will only fill up a line, and the second vector can be written as a scalar multiple of the first vector.

In other words, one of the vectors can be removed without changing the span; we call such a set of vectors linearly dependent.

On the other hand, if the third vector is outside the plane formed by the first two vectors, it adds a new direction to the span. In this case, the third vector is not a linear combination of the first two vectors, and removing it would change the span; we call such a set of vectors linearly independent.

For example, given two vectors $\vec{u}$ and $\vec{v}$, they are linearly dependent if $\vec{u}$ can be written as $c\vec{v}$ for some scalar $c$. If, however, $\vec{u}$ cannot be written as $c\vec{v}$ for any scalar $c$, then the vectors are linearly independent.

For three vectors, it's the same idea. Given $\vec{u}$, $\vec{v}$, and $\vec{w}$, they are linearly dependent if any one of them can be written as a linear combination of the other two.

Example Problem: Linear Independence and the Zero Vector

Consider the set of vectors $\vec{v}_1, \vec{v}_2, \vec{v}_3$, and assume them to be linearly independent. Is there a linear combination of these vectors that gives the zero vector?

If the vectors are linearly independent, then no vector can be written as a linear combination of the other two. Consider the linear combination $a_1 \vec{v}_1 + a_2 \vec{v}_2 + a_3 \vec{v}_3 = \vec{0}$, and suppose one of the coefficients, say $a_1$, is nonzero. If we rearrange, we get $\vec{v}_1 = -\frac{a_2}{a_1} \vec{v}_2 - \frac{a_3}{a_1} \vec{v}_3$, which means that $\vec{v}_1$ can be written as a linear combination of $\vec{v}_2$ and $\vec{v}_3$, which contradicts our assumption.

Thus:

If a set of vectors is linearly independent, the only linear combination that gives the zero vector is the trivial one, where all the coefficients are zero.
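
Numerically, this fact is equivalent to saying that the matrix with the vectors as its columns has full column rank; here's a small sketch using the standard basis of $\mathbb{R}^3$ as an example:

```python
import numpy as np

v1 = np.array([1, 0, 0])
v2 = np.array([0, 1, 0])
v3 = np.array([0, 0, 1])

A = np.column_stack([v1, v2, v3])

# A @ c = 0 has only the trivial solution c = 0 exactly when the
# columns of A are linearly independent, i.e. rank(A) == number of columns
print(np.linalg.matrix_rank(A) == A.shape[1])  # True
```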

Example Problem: Determining Linear Independence

Determine the linear independence of the vectors $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

As discussed in the previous example, if the vectors are linearly dependent, that means we can make a linear combination of the vectors, with not all coefficients zero, that gives the zero vector:

$$a_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + a_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + a_3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This creates a system of equations:

$$\begin{aligned} a_1 + a_3 &= 0 \\ a_2 + a_3 &= 0 \end{aligned}$$

We have two equations and three unknowns, but since we just want to find nonzero solutions, we can simply set a value for one of the coefficients. Let $a_3 = 1$:

$$a_1 = -1, \quad a_2 = -1$$

Think about what this means. If we set $a_3 = 1$, we have a valid solution where $a_1 = -1$ and $a_2 = -1$. This means that we have a valid linear combination of the vectors (with nonzero coefficients) that gives the zero vector, which means that the vectors are linearly dependent.
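
We can verify both the nontrivial combination and the dependence itself in a short NumPy sketch:

```python
import numpy as np

v1 = np.array([1, 0])
v2 = np.array([0, 1])
v3 = np.array([1, 1])

# The nonzero coefficients found above
a1, a2, a3 = -1, -1, 1
print(a1 * v1 + a2 * v2 + a3 * v3)  # [0 0] -- a nontrivial combination

# Equivalently, the rank is less than the number of vectors
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 < 3, so the set is linearly dependent
```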

Example Problem: Determining Linear Independence (Logical Approach)

Determine the linear independence of three vectors $\vec{v}_1, \vec{v}_2, \vec{v}_3$ in $\mathbb{R}^2$. Do not calculate it out, just think about it logically.

We can use some logical reasoning to show that this set of vectors cannot be linearly independent.

Let's assume the first two vectors are linearly independent (if they are not, the set is already linearly dependent and we are done). That means that they span the entire 2D plane $\mathbb{R}^2$.

However, notice that the third vector also lives in $\mathbb{R}^2$, meaning the third vector is within the span of the first two vectors. Hence, the set of vectors is linearly dependent.

Formal Definition of Basis

Previously, we discussed how a basis is a set of vectors that can be used to represent any vector in a space by scaling and adding these basis vectors. This should remind us of the concept of a linear combination: we can say that any vector $\vec{v}$ can be represented as $\vec{v} = a_1 \vec{v}_1 + a_2 \vec{v}_2 + \cdots + a_n \vec{v}_n$.

Hence, for a set of vectors to be a basis, we must be able to represent any vector as a linear combination of the basis vectors; that is, they must span the entire space.

Additionally, we want there to be only one way to represent each vector. This means that the basis vectors must be linearly independent; if one of them could be written as a combination of the others, the same vector would have multiple representations.

Therefore, we can make a more formal definition of a basis:

A set of vectors $\vec{v}_1, \ldots, \vec{v}_n$ is a basis for a vector space $V$ if they are linearly independent and $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = V$.

As previously stated, we will not delve much into abstract vector spaces, but this definition is crucial for understanding the concept of a basis.
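
For $\mathbb{R}^n$ specifically, this definition is easy to check numerically; `is_basis` below is just an illustrative helper:

```python
import numpy as np

def is_basis(vectors):
    """Check whether the given vectors form a basis of R^n.

    For R^n this amounts to: there are exactly n of them and they
    are linearly independent (full rank), which together also
    guarantee that they span R^n.
    """
    A = np.column_stack(vectors)
    n_dim, n_vecs = A.shape
    return n_dim == n_vecs and np.linalg.matrix_rank(A) == n_dim

print(is_basis([np.array([1, 0]), np.array([0, 1])]))  # True
print(is_basis([np.array([1, 2]), np.array([2, 4])]))  # False: parallel
```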

Summary and Next Steps

In this page, we discussed the concept of a linear combination, which is a way to combine vectors using scalar multiplication and addition. Additionally, we touched on the concepts of span, linear independence, and basis, which are all related to linear combinations.

Here are the key points to remember:

  • A basis is a set of vectors that can be used to represent any vector in a space. Formally, a set of vectors is a basis if they are linearly independent and span the entire space.
  • A linear combination of vectors is a way to combine vectors using scalar multiplication and addition.
  • The span of a set of vectors is the set of all possible linear combinations of those vectors.
  • A set of vectors is linearly independent if no vector can be written as a linear combination of the other vectors.
  • If a set of vectors is linearly independent, the only linear combination that gives the zero vector is the trivial one, where all the coefficients are zero.

Previously, we discussed the two fundamental operations of vectors: addition and scalar multiplication. Have you ever wondered how we can multiply two vectors together? In the next section, we will introduce a new vector operation known as the dot product.

Appendix

Why Do n Linearly Independent Vectors Span n-Dimensional Space?

In the main text, we mentioned that if we have $n$ linearly independent vectors in an $n$-dimensional space, they will span the entire space. For example, in 2D space, if we have two linearly independent vectors $\vec{u}$ and $\vec{v}$, then $\operatorname{span}(\vec{u}, \vec{v}) = \mathbb{R}^2$.

Can we prove this statement?

Begin by writing the theorem in a more formal way:

Theorem: If $V$ is an $n$-dimensional vector space and $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent vectors in $V$, then $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = V$.

We can prove this theorem by contradiction.

  1. Let's first assume the opposite: that the span of the vectors does not fill up the entire space. This means that $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) \neq V$, meaning there is some vector $\vec{w} \in V$ that cannot be written as a linear combination of $\vec{v}_1, \ldots, \vec{v}_n$.
  2. Next, consider adding $\vec{w}$ to the set of vectors: $\{\vec{v}_1, \ldots, \vec{v}_n, \vec{w}\}$, and call this set $S$. Remember, $\vec{w}$ cannot be written as a linear combination of the other vectors. Thus, the set $S$ is linearly independent.
  3. However, we now have $n + 1$ linearly independent vectors in an $n$-dimensional space, which is a contradiction: we cannot have more than $n$ linearly independent vectors in an $n$-dimensional space. Therefore, our assumption that the span of the vectors does not fill up the entire space is false.
  4. Hence, the span of the vectors must fill up the entire space: $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = V$.
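
As a numerical illustration of step 3, appending any vector to $n$ linearly independent vectors in $\mathbb{R}^n$ can never produce $n + 1$ linearly independent vectors (random vectors are used here purely as an example):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
# n linearly independent vectors in R^n (columns of a full-rank matrix)
A = rng.standard_normal((n, n))
assert np.linalg.matrix_rank(A) == n

# Appending ANY vector w keeps the rank at n: there is no room for an
# (n+1)-th independent direction, so w must already be in the span
w = rng.standard_normal(n)
A_extended = np.column_stack([A, w])
print(np.linalg.matrix_rank(A_extended))  # 3
```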